We propose a novel approach for identifying the difficulty of visual questions in Visual Question Answering (VQA) without direct supervision or annotations of difficulty. Prior work has considered the diversity of ground-truth answers given by human annotators. In contrast, we analyze the difficulty of visual questions based on the behavior of multiple different VQA models. We propose to cluster the entropy values of the predicted answer distributions obtained from three different models: a baseline method that takes both the image and the question as input, and two variants that take only the image or only the question as input. We use simple k-means to cluster the visual questions of the VQA v2 validation set. We then use state-of-the-art methods to determine the accuracy and the entropy of the answer distributions for each cluster. A benefit of the proposed approach is that no annotation of difficulty is required, since the accuracy of each cluster reflects the difficulty of the visual questions belonging to it. Our approach can identify clusters of difficult visual questions that state-of-the-art methods struggle to answer correctly. Detailed analysis of the VQA v2 dataset shows that (1) all methods exhibit poor performance on the most difficult cluster (about 10% accuracy), (2) as cluster difficulty increases, the answers predicted by the different methods begin to diverge, and (3) the entropy values of the clusters are highly correlated with cluster accuracy. We show that our approach can assess the difficulty of visual questions without ground truth (i.e., on the VQA v2 test set) by assigning them to one of the clusters. We hope this will stimulate new research directions and the development of new algorithms.
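A minimal sketch of the clustering step described above, assuming precomputed softmax answer distributions from the three models and per-question correctness flags (all variable and function names here are hypothetical, not from the authors' code):

```python
# Cluster visual questions by the entropy of three models' answer
# distributions, then score each cluster by its accuracy.
import numpy as np
from sklearn.cluster import KMeans

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of an (N, num_answers) distribution."""
    return -(p * np.log(p + eps)).sum(axis=1)

def cluster_difficulty(probs_iq, probs_i, probs_q, correct, k=5, seed=0):
    # One 3-d feature per question: entropy under the image+question model
    # and the image-only / question-only variants.
    feats = np.stack([entropy(probs_iq), entropy(probs_i), entropy(probs_q)], axis=1)
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(feats)
    # Per-cluster accuracy acts as the difficulty score: no difficulty
    # annotation is needed.
    return {c: correct[labels == c].mean() for c in range(k)}
```

Ranking the clusters by accuracy then yields a difficulty ordering; unlabeled questions can be graded by assigning them to the nearest cluster centroid.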
Knowledge representation and reasoning in law are essential to facilitate the automation of legal analysis and decision-making tasks. In this paper, we propose a new approach based on legal science, specifically legal taxonomy, for representing and reasoning with legal documents. Our approach interprets the regulations in legal documents as binary trees, which enables legal reasoning systems to make decisions and to resolve logical contradictions. The advantages of this approach are twofold. First, legal reasoning can be performed directly on the binary-tree representation of the regulations. Second, the binary-tree representation is more understandable than existing sentence-based representations. We provide an example of how our approach can be used to interpret the regulations in a legal document.
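As an illustrative guess at such a representation (the paper's actual encoding may differ), a regulation can be stored as a binary tree with logical connectives at internal nodes and atomic legal conditions at leaves:

```python
# A minimal sketch: evaluate whether a regulation, encoded as a binary
# tree, is satisfied by a set of established facts.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str                      # "AND", "OR", or an atomic condition
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def holds(node: Node, facts: set[str]) -> bool:
    """Decide whether the regulation is satisfied by the given facts."""
    if node.left is None and node.right is None:   # leaf: atomic condition
        return node.label in facts
    l, r = holds(node.left, facts), holds(node.right, facts)
    return (l and r) if node.label == "AND" else (l or r)

# "A contract is valid if the parties consent AND (it is written OR notarized)."
rule = Node("AND", Node("consent"), Node("OR", Node("written"), Node("notarized")))
print(holds(rule, {"consent", "notarized"}))   # True
```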
Adversarial attacks on thermal infrared imaging expose the risk of related applications. Estimating the security of these systems is essential for safely deploying them in the real world. In many cases, realizing attacks in the physical space requires elaborate, special-purpose perturbations; such solutions are often \emph{impractical} and \emph{attention-grabbing}. To address the need for a physically practical and stealthy adversarial attack, we introduce \textsc{HotCold} Block, a novel physical attack on infrared detectors that hides persons using wearable Warming Paste and Cooling Paste. By attaching these readily available, temperature-controlled materials to the body, \textsc{HotCold} Block evades human eyes efficiently. Moreover, unlike existing methods that build adversarial patches with complex texture and structure features, \textsc{HotCold} Block uses an SSP-oriented adversarial optimization algorithm that enables attacks with pure color blocks and explores the influence of size, shape, and position on attack performance. Extensive experimental results in both digital and physical environments demonstrate the performance of our proposed \textsc{HotCold} Block. \emph{Code is available: \textcolor{magenta}{https://github.com/weihui1308/HOTCOLDBlock}}.
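A hedged sketch of the kind of search this enables: optimize the position and size of a single uniform block so that a black-box detector's person confidence drops. The paper's SSP-oriented algorithm is more elaborate; `detector_person_score` is a hypothetical scoring function, not part of the released code:

```python
# Random search over a pure-color block's placement on a thermal image.
import numpy as np

def apply_block(img, x, y, w, h, value):
    """Paste a uniform-intensity block onto a copy of the image."""
    out = img.copy()
    out[y:y + h, x:x + w] = value
    return out

def search_block(img, detector_person_score, value=0.0, trials=500, rng=None):
    rng = rng or np.random.default_rng(0)
    H, W = img.shape[:2]
    best, best_score = None, detector_person_score(img)
    for _ in range(trials):
        w, h = rng.integers(10, W // 4), rng.integers(10, H // 4)
        x, y = rng.integers(0, W - w), rng.integers(0, H - h)
        score = detector_person_score(apply_block(img, x, y, w, h, value))
        if score < best_score:          # lower person confidence = stronger attack
            best, best_score = (x, y, w, h), score
    return best, best_score
```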
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their vulnerability to adversarial attacks remains a serious concern. A series of works has shown that adding elaborate perturbations to images can cause catastrophic degradation in DNN performance, a phenomenon that exists not only in the digital space but also in the physical space. Therefore, estimating the security of DNN-based systems is critical for safely deploying them in the real world, especially in security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. We then define the adversarial medium, which is essential for performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and discuss their performance on the trilemma of effectiveness, stealthiness, and robustness. Finally, we discuss current challenges and potential future directions.
Existing learning-based image inpainting methods are still challenged when facing complex semantic environments and diverse hole patterns. The prior information learned from large-scale training data remains insufficient for these situations. A reference image captured of the same scene shares similar texture and structure priors with the corrupted image, which offers new prospects for the image inpainting task. Inspired by this, we first build a benchmark dataset containing 10K pairs of input and reference images for reference-guided inpainting. We then adopt an encoder-decoder structure to separately infer the texture and structure features of the input image, accounting for their pattern discrepancy during inpainting. A feature alignment module is further designed to refine these features of the input image under the guidance of the reference image. Both quantitative and qualitative evaluations demonstrate the superiority of our method in completing complex holes.
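A minimal sketch of what a feature alignment module of this kind could look like, using generic cross-attention from input-image features to reference-image features (an illustrative assumption, not the paper's exact design):

```python
# Input features attend to spatially flattened reference features so that
# matching texture/structure can be borrowed from the reference image.
import torch
import torch.nn as nn

class FeatureAlign(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)

    def forward(self, feat_in, feat_ref):
        B, C, H, W = feat_in.shape
        q = self.q(feat_in).flatten(2).transpose(1, 2)    # B, HW, C
        k = self.k(feat_ref).flatten(2)                   # B, C, HW
        v = self.v(feat_ref).flatten(2).transpose(1, 2)   # B, HW, C
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)    # B, HW, HW
        aligned = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
        return feat_in + aligned                          # refined input features
```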
Most deep metric learning (DML) methods adopt a strategy that forces all positive samples to be close in the embedding space while keeping them away from negative samples. However, such a strategy ignores the internal relationships among positive (negative) samples and often leads to overfitting, especially in the presence of hard samples and mislabeled data. In this work, we propose a simple yet effective regularization, namely Listwise Self-Distillation (LSD), which progressively distills a model's own knowledge to adaptively assign a more appropriate distance target to each sample pair in a batch. LSD encourages smoother embeddings and more informative mining within positive (negative) samples, mitigating overfitting and thereby improving generalization. Our LSD can be directly integrated into general DML frameworks. Extensive experiments show that LSD consistently boosts the performance of various metric learning methods on multiple datasets.
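A hedged sketch of such a listwise self-distillation term: each anchor's row of in-batch similarities is turned into a distribution, and the student is pulled toward a softened version produced by its own detached embeddings at a higher temperature (the temperatures and teacher choice are assumptions, not the paper's exact recipe):

```python
# Listwise self-distillation over the in-batch similarity rows.
import torch
import torch.nn.functional as F

def lsd_loss(emb, t_student=0.1, t_teacher=0.2):
    """emb: (B, D) batch of embeddings from the current model."""
    s = F.normalize(emb, dim=1)
    t = s.detach()                                   # the model distills its own knowledge
    n = len(s)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=s.device)
    sim_s = (s @ s.t() / t_student)[off_diag].view(n, n - 1)
    sim_t = (t @ t.t() / t_teacher)[off_diag].view(n, n - 1)
    soft_targets = torch.softmax(sim_t, dim=1)       # adaptive per-pair distance targets
    return F.kl_div(torch.log_softmax(sim_s, dim=1), soft_targets, reduction="batchmean")
```

In practice it would simply be added to a base metric loss, e.g. `loss = triplet_loss + lam * lsd_loss(emb)`.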
Understanding foggy image sequences in driving scenes is critical for autonomous driving, but it remains a challenging task due to the difficulty of collecting and annotating real-world images under adverse weather. Recently, the self-training strategy has been considered a powerful solution for unsupervised domain adaptation, iteratively adapting the model from the source domain to the target domain by generating target pseudo labels and re-training the model. However, selecting confident pseudo labels inevitably suffers from a conflict between sparsity and accuracy, both of which lead to suboptimal models. To tackle this problem, we exploit the characteristics of foggy image sequences of driving scenes to densify the confident pseudo labels. Specifically, based on two findings about sequential image data, local spatial similarity and adjacent temporal correspondence, we propose a novel Target-Domain driven pseudo label Diffusion (TDo-Dif) scheme. It employs superpixels and optical flow to identify spatial similarity and temporal correspondence, respectively, and then diffuses the confident but sparse pseudo labels within a superpixel or across a temporally corresponding pair linked by the flow. Moreover, to ensure the feature similarity of the diffused pixels, we introduce a local spatial similarity loss and a temporal contrastive loss in the model re-training stage. Experimental results show that our TDo-Dif scheme helps the adapted model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets (Foggy Zurich and Foggy Driving, respectively), exceeding state-of-the-art unsupervised domain-adaptive semantic segmentation methods. Models and data can be found at https://github.com/velor2012/tdo-dif.
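A simplified sketch of the spatial half of the diffusion (the temporal half, which links pixels across frames via optical flow, is omitted): confident but sparse pseudo labels are spread by majority vote over the superpixel they fall in. skimage's SLIC stands in here for the paper's superpixel method:

```python
# Densify sparse pseudo labels by diffusing them within superpixels.
import numpy as np
from skimage.segmentation import slic

def diffuse_labels(image, sparse_labels, ignore=255, n_segments=800):
    """sparse_labels: H x W array; `ignore` marks pixels without a confident label."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    dense = sparse_labels.copy()
    for sp in np.unique(segments):
        region = segments == sp
        confident = sparse_labels[region]
        confident = confident[confident != ignore]
        if confident.size:                      # diffuse the majority confident label
            vals, counts = np.unique(confident, return_counts=True)
            dense[region] = vals[np.argmax(counts)]
    return dense
```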
Most computer vision systems assume distortion-free images as input. However, the widely used rolling-shutter (RS) image sensors suffer from geometric distortion when the camera and the object move during capture. Correcting RS distortion has been studied extensively, but most existing works rely heavily on prior assumptions about scenes or motions. Moreover, their motion estimation steps are either oversimplified or computationally inefficient due to flow warping, limiting their applicability. In this paper, we use rolling shutter with a global reset feature (RSGR) to restore clean global-shutter (GS) videos. This feature enables us to turn the rectification problem into a deblur-like problem, getting rid of inaccurate and costly explicit motion estimation. First, we build an optical system that captures paired RSGR/GS videos. Second, we develop a novel algorithm incorporating spatial and temporal designs to correct the spatially varying RSGR distortion. Third, we demonstrate that existing image-to-image translation algorithms can recover clean GS videos from distorted RSGR inputs, while our algorithm achieves the best performance thanks to its specific designs. Our results are not only visually appealing but also beneficial to downstream tasks. Compared with the state-of-the-art RS solution, our RSGR solution is superior in both effectiveness and efficiency. Considering that it is easy to realize without changing the hardware, we believe our RSGR solution can potentially replace the RS solution for capturing distortion-free videos at low noise and low budget.
The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine the architectures of various 3D CNNs, from relatively shallow to very deep, on current video datasets. Based on the results of those experiments, the following conclusions could be obtained: (i) training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet, but not for Kinetics; (ii) the Kinetics dataset has sufficient data for training deep 3D CNNs and enables training of ResNets with up to 152 layers, interestingly similar to 2D ResNets on ImageNet, with ResNeXt-101 achieving 78.4% average accuracy on the Kinetics test set; (iii) simple 3D architectures pretrained on Kinetics outperform complex 2D architectures, and the pretrained ResNeXt-101 achieved 94.5% and 70.2% on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various image tasks. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The codes and pretrained models used in this study are publicly available.
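For reference, a basic 3D residual block of the kind these architectures stack can look like the following: a standard inflation of the 2D ResNet basic block to 3x3x3 spatio-temporal kernels, given here as an illustrative sketch rather than the authors' released code:

```python
# A 3D residual block operating on (N, C, T, H, W) video tensors.
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.down = None
        if stride != 1 or in_ch != out_ch:    # match shapes for the residual sum
            self.down = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm3d(out_ch),
            )

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)
```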